Analysis of Basis Pursuit Via Capacity Sets
Finding the sparsest solution of an under-determined linear system
of equations is of interest in many applications. This problem is
known to be NP-hard. Recent work studied conditions on the support size of
the solution that allow its recovery using L1-minimization, via the Basis Pursuit
algorithm. These conditions often rely on a scalar property of
the matrix called the mutual coherence. In this work we introduce an alternative set of
features of an arbitrarily given matrix, called the "capacity sets". We show how
these can be used to analyze the performance of basis pursuit, leading to
improved bounds and predictions of performance. Both theoretical and numerical
methods are presented, all using the capacity values, and shown to lead to
improved assessments of the basis pursuit's success in finding the sparsest
solution of the linear system.
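As a point of reference, basis pursuit itself can be posed as a linear program. A minimal sketch of that standard reduction (using SciPy's `linprog`; this illustrates the algorithm being analyzed, not the capacity-set machinery of the paper):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, b):
    """Solve min ||x||_1 subject to Ax = b as a linear program.

    Stack z = [x, u] with constraints -u <= x <= u; minimizing
    sum(u) then minimizes the L1 norm of x.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of u
    A_eq = np.hstack([A, np.zeros((m, n))])         # Ax = b (u unconstrained here)
    I = np.eye(n)
    A_ub = np.vstack([np.hstack([I, -I]),           #  x - u <= 0
                      np.hstack([-I, -I])])         # -x - u <= 0
    b_ub = np.zeros(2 * n)
    bounds = [(None, None)] * n + [(0, None)] * n   # x free, u >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=bounds, method="highs")
    return res.x[:n]

# Toy system: two equations, three unknowns; the sparsest solution is (0, 0, 1).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
x = basis_pursuit(A, b)
```

Here the general solution set is (1-t, 1-t, t), whose L1 norm is minimized at t = 1, so the LP returns the sparsest solution.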
Sparsity without the Complexity: Loss Localisation using Tree Measurements
We study network loss tomography based on observing average loss rates over a
set of paths forming a tree -- a severely underdetermined linear problem for
the unknown link loss probabilities. We examine in detail the role of sparsity
as a regularising principle, pointing out that the problem is technically
distinct from others in the compressed sensing literature. While sparsity has
been applied in the context of tomography, key questions regarding uniqueness
and recovery remain unanswered. Our work exploits the tree structure of path
measurements to derive sufficient conditions for sparse solutions to be unique
and the condition that minimization recovers the true underlying
solution. We present a fast single-pass linear algorithm for
minimization and prove that a minimum solution is both unique and
sparsest for tree topologies. By considering the placement of lossy links
within trees, we show that sparse solutions remain unique more often than is
commonly supposed. We prove similar results for a noisy version of the problem
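To make the underdetermination concrete, consider the smallest branching tree: two root-to-leaf paths sharing one link give two equations in three unknown link losses. A hedged sketch (the topology and numbers are illustrative, not from the paper; losses are treated additively in the log domain, and since link losses are nonnegative, L1-minimization becomes a plain linear program):

```python
import numpy as np
from scipy.optimize import linprog

# Tree: root--a, a--leaf1, a--leaf2. Columns = links (root-a, a-leaf1, a-leaf2),
# rows = the two root-to-leaf paths: a severely underdetermined system.
M = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
y = np.array([0.3, 0.3])           # both paths observe the same log-domain loss

# With x >= 0, min ||x||_1 s.t. Mx = y reduces to the LP: min 1'x.
res = linprog(np.ones(3), A_eq=M, b_eq=y, bounds=[(0, None)] * 3,
              method="highs")
x = res.x                          # sparsest explanation: the single shared link
```

Every feasible solution has the form (t, 0.3-t, 0.3-t) with total L1 norm 0.6 - t, so the unique minimizer puts all the loss on the shared link, matching the intuition that a common loss on sibling paths is best explained by their shared ancestor link.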
The road to deterministic matrices with the restricted isometry property
The restricted isometry property (RIP) is a well-known matrix condition that
provides state-of-the-art reconstruction guarantees for compressed sensing.
While random matrices are known to satisfy this property with high probability,
deterministic constructions have found less success. In this paper, we consider
various techniques for demonstrating RIP deterministically, some popular and
some novel, and we evaluate their performance. In evaluating some techniques,
we apply random matrix theory and inadvertently find a simple alternative proof
that certain random matrices are RIP. Later, we propose a particular class of
matrices as candidates for being RIP, namely, equiangular tight frames (ETFs).
Using the known correspondence between real ETFs and strongly regular graphs,
we investigate certain combinatorial implications of a real ETF being RIP.
Specifically, we give probabilistic intuition for a new bound on the clique
number of Paley graphs of prime order, and we conjecture that the corresponding
ETFs are RIP in a manner similar to random matrices.
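For toy sizes, the restricted isometry constant can be checked directly by brute force over all column supports. A minimal sketch (exponential in k, so only feasible for small matrices; this is a verification aid, not one of the deterministic constructions discussed above):

```python
import numpy as np
from itertools import combinations

def rip_constant(A, k):
    """Brute-force the order-k restricted isometry constant delta_k:
    the largest deviation from 1 of the eigenvalues of A_S^T A_S,
    over all column supports S of size k (columns assumed unit-norm)."""
    n = A.shape[1]
    delta = 0.0
    for S in combinations(range(n), k):
        gram = A[:, list(S)].T @ A[:, list(S)]
        ev = np.linalg.eigvalsh(gram)          # ascending eigenvalues
        delta = max(delta, abs(ev[0] - 1.0), abs(ev[-1] - 1.0))
    return delta

# A matrix with orthonormal columns is a perfect isometry on every support.
A = np.eye(5)[:, :4]
delta = rip_constant(A, 2)
```

For orthonormal columns every Gram submatrix is the identity, so `delta` is 0; a matrix with two equal columns would instead give delta_2 = 1, the worst case.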
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of L2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
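For the L1 prior, the forward-backward scheme of (iii) specializes to iterative soft-thresholding (ISTA): a gradient step on the smooth data-fidelity term followed by the L1 proximal operator. A minimal sketch (the step size and regularization weight below are illustrative):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    """Forward-backward splitting for min 0.5||Ax - b||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                          # forward (explicit) step
        x = soft_threshold(x - step * grad, step * lam)   # backward (prox) step
    return x

# Sanity check: for A = I the minimizer is the soft-thresholded data.
A = np.eye(3)
b = np.array([2.0, 0.1, -1.0])
x = ista(A, b, lam=0.5)
```

With A equal to the identity the iteration reaches the closed-form solution soft(b, lam) in a single step, which makes the fixed point easy to verify by hand.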
Distributed Block Coordinate Descent for Minimizing Partially Separable Functions
In this work we propose a distributed randomized block coordinate descent
method for minimizing a convex function with a huge number of
variables/coordinates. We analyze its complexity under the assumption that the
smooth part of the objective function is partially block separable, and show
that the degree of separability directly influences the complexity. This
extends the results in [Richtarik, Takac: Parallel coordinate descent methods
for big data optimization] to a distributed environment. We first show that
partially block separable functions admit an expected separable
overapproximation (ESO) with respect to a distributed sampling, compute the ESO
parameters, and then specialize complexity results from recent literature that
hold under the generic ESO assumption. We describe several approaches to
distribution and synchronization of the computation across a cluster of
multi-core computers and provide promising computational results.Comment: in Recent Developments in Numerical Analysis and Optimization, 201
Recent Developments in Algorithmic Teaching
The present paper surveys recent developments in algorithmic teaching. First, the traditional teaching dimension model is recalled. Starting from the observation that the teaching dimension model sometimes leads to counterintuitive results, recently developed approaches are presented. Here, the main emphasis is put on the following aspects derived from human teaching/learning behavior: the order in which examples are presented should matter; teaching should become harder when the memory size of the learners decreases; teaching should become easier if the learners provide feedback; and it should be possible to teach infinite concepts and/or finite and infinite concept classes. Recent developments in algorithmic teaching achieving (some of) these aspects are presented and compared.
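The teaching dimension of a finite concept class can be computed by brute force: for each concept, find the smallest labeled sample that is consistent with that concept alone. A minimal sketch (concepts are represented as sets of positive instances; the class of singletons plus the empty concept is a classic example of the counterintuitive behavior mentioned above, since the intuitively trivial empty concept needs every instance as a negative example):

```python
from itertools import combinations

def teaching_dimension(concepts, instances):
    """TD(C) = max over concepts c of the size of a smallest teaching set:
    a labeled sample consistent with c and with no other concept in C."""
    def td(c):
        for k in range(len(instances) + 1):
            for S in combinations(instances, k):
                labels = [(x, x in c) for x in S]       # label each shown instance
                consistent = [d for d in concepts
                              if all((x in d) == v for x, v in labels)]
                if consistent == [c]:                   # only c explains the sample
                    return k
        raise ValueError("class contains duplicate concepts")
    return max(td(c) for c in concepts)

singletons = [frozenset([i]) for i in range(3)]
td_singletons = teaching_dimension(singletons, range(3))
td_with_empty = teaching_dimension(singletons + [frozenset()], range(3))
```

Each singleton is taught by its single positive example, so the class of singletons has teaching dimension 1; adding the empty concept raises it to 3, since only the full set of negative examples rules out all three singletons.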
The Automated Design of Parts Orienters
This paper concerns the design of parts orienters -- the dual to the motion planning problem. Three particular paradigms are considered, and their abstractions to the computational domain lead to interesting problems in graph pebbling and function composition on finite sets. Polynomial-time algorithms are developed for the abstracted problems.
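One way the function-composition abstraction can be made concrete: each feeder operation acts as a function on the finite set of part orientations, and a design is a composition of operations that maps every starting orientation to a single known one. A hedged sketch (the states and actions below are invented for illustration; a breadth-first search over reachable orientation sets, not the paper's polynomial-time algorithms):

```python
from collections import deque

def orienting_sequence(states, actions):
    """BFS for a composition of actions mapping every start state to one
    state -- the abstract 'parts orienting' problem on a finite set."""
    start = frozenset(states)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        current, seq = queue.popleft()
        if len(current) == 1:            # all orientations collapsed to one
            return seq
        for name, f in actions.items():
            image = frozenset(f[s] for s in current)
            if image not in seen:
                seen.add(image)
                queue.append((image, seq + [name]))
    return None                          # no orienting composition exists

# Four orientations of a part; "rotate" is a quarter turn and the
# non-injective "push" collapses orientation 1 onto orientation 0.
actions = {"rotate": {s: (s + 1) % 4 for s in range(4)},
           "push":   {0: 0, 1: 0, 2: 2, 3: 3}}
seq = orienting_sequence(range(4), actions)

# Verify: composing the found sequence sends every orientation to the same one.
final = set(range(4))
for name in seq:
    final = {actions[name][s] for s in final}
```

Because "push" merges two orientations and "rotate" cycles the rest into its reach, alternating the two eventually funnels all four orientations into one, which the verification loop at the end confirms.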